59 research outputs found

    Are Microcontrollers Ready for Deep Learning-Based Human Activity Recognition?

    The last decade has seen exponential growth in the field of deep learning, with deep learning on microcontrollers emerging as a new frontier for this research area. This paper presents a case study about machine learning on microcontrollers, with a focus on human activity recognition using accelerometer data. We build machine learning classifiers suitable for execution on modern microcontrollers and evaluate their performance. Specifically, we compare Random Forests (RF), a classical machine learning technique, with Convolutional Neural Networks (CNN) in terms of classification accuracy and inference speed. The results show that RF classifiers achieve similar levels of classification accuracy while being several times faster than a small custom CNN model designed for the task. Both the RF and the custom CNN are several orders of magnitude faster than state-of-the-art deep learning models. On the one hand, these findings confirm the feasibility of using deep learning on modern microcontrollers. On the other hand, they cast doubt on whether deep learning is the best approach for this application, especially when high inference speed, and thus low energy consumption, is the key objective.
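
    The paper does not include code, but a rough C sketch can illustrate why RF inference is cheap on a microcontroller: window-level statistics plus a handful of comparisons per tree, with no multiply-accumulate layers. All feature choices, thresholds, and class labels below are hypothetical, not taken from the paper.

        /* Hypothetical RF-style inference on accelerometer windows.
           Thresholds and class labels are illustrative only. */
        #include <stdio.h>

        #define WIN 50  /* samples per window, e.g. 1 s at 50 Hz (assumed) */

        enum activity { IDLE = 0, WALK = 1, RUN = 2 };

        /* Simple statistical features over one window of magnitudes. */
        static void extract_features(const float *mag, float *mean, float *var)
        {
            float s = 0.0f, s2 = 0.0f;
            for (int i = 0; i < WIN; i++) { s += mag[i]; s2 += mag[i] * mag[i]; }
            *mean = s / WIN;
            *var = s2 / WIN - (*mean) * (*mean);
        }

        /* Each "tree" costs just a few comparisons per inference. */
        static int tree1(float mean, float var)
        {
            if (var < 0.02f) return IDLE;       /* made-up threshold */
            return (mean < 1.4f) ? WALK : RUN;  /* made-up threshold */
        }

        static int tree2(float mean, float var)
        {
            if (mean < 1.05f && var < 0.05f) return IDLE;
            return (var < 0.5f) ? WALK : RUN;
        }

        /* Majority vote across the (tiny) forest. */
        int classify(const float *mag)
        {
            float mean, var;
            int votes[3] = { 0 };
            extract_features(mag, &mean, &var);
            votes[tree1(mean, var)]++;
            votes[tree2(mean, var)]++;
            int best = 0;
            for (int c = 1; c < 3; c++)
                if (votes[c] > votes[best]) best = c;
            return best;
        }

        /* Example: classify one synthetic at-rest window (~1 g magnitude). */
        int main(void)
        {
            float mag[WIN];
            for (int i = 0; i < WIN; i++) mag[i] = 1.0f;
            printf("predicted class: %d\n", classify(mag));
            return 0;
        }

    Even a small CNN, by contrast, must evaluate every filter over the whole window, which is consistent with the inference-speed gap the abstract reports.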

    TSCH for Long Range Low Data Rate Applications

    Distributed Ledger Technology and the Internet of Things: A Feasibility Study

    Online Feature Selection for Activity Recognition using Reinforcement Learning with Multiple Feedback

    Recent advances in both machine learning and the Internet of Things have attracted attention to automatic Activity Recognition, where users wear a device with sensors whose outputs are mapped to a predefined set of activities. However, few studies have considered the balance between wearable power consumption and activity recognition accuracy. This is particularly important when part of the computational load happens on the wearable device. In this paper, we present a new methodology to perform feature selection on the device, based on Reinforcement Learning (RL), to find the optimum balance between power consumption and accuracy. To accelerate the learning speed, we extend the RL algorithm to handle multiple sources of feedback, and use them to tailor the policy while also estimating the accuracy of each feedback source. We evaluated our system on the SPHERE challenge dataset, a publicly available research dataset. The results show that our proposed method achieves a good trade-off between wearable power consumption and activity recognition accuracy.
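
    As a concrete illustration, on-device feature selection can be framed as a multi-armed bandit: each arm is a candidate feature subset, and the reward trades off accuracy feedback against an energy-cost estimate. The epsilon-greedy sketch below is a generic stand-in, not the paper's algorithm; the subset count, costs, and weights are all assumptions.

        /* Hypothetical epsilon-greedy sketch of on-device feature
           selection. Not the paper's algorithm; constants are made up. */
        #include <stdio.h>
        #include <stdlib.h>

        #define N_ARMS 4      /* candidate feature subsets (assumed) */
        #define EPSILON 0.1f  /* exploration rate (assumed) */
        #define LAMBDA 0.5f   /* weight of the power penalty (assumed) */

        static float q[N_ARMS]; /* running value estimate per subset */
        static int   n[N_ARMS]; /* pulls per subset */

        /* Energy cost of computing each subset, normalised (made up). */
        static const float cost[N_ARMS] = { 0.1f, 0.3f, 0.6f, 1.0f };

        static int select_subset(void)
        {
            if ((float)rand() / RAND_MAX < EPSILON)
                return rand() % N_ARMS;       /* explore */
            int best = 0;
            for (int a = 1; a < N_ARMS; a++)
                if (q[a] > q[best]) best = a; /* exploit */
            return best;
        }

        /* accuracy_fb in [0,1] could blend several feedback sources,
           e.g. user corrections and classifier confidence. */
        static void update(int arm, float accuracy_fb)
        {
            float reward = accuracy_fb - LAMBDA * cost[arm];
            n[arm]++;
            q[arm] += (reward - q[arm]) / n[arm]; /* incremental mean */
        }

        /* Toy simulation: subset 2 offers the best accuracy/cost trade-off. */
        int main(void)
        {
            static const float acc[N_ARMS] = { 0.6f, 0.8f, 0.97f, 0.98f };
            srand(1);
            for (int t = 0; t < 1000; t++) {
                int a = select_subset();
                update(a, acc[a]);
            }
            for (int a = 0; a < N_ARMS; a++)
                printf("subset %d: value %.3f after %d pulls\n", a, q[a], n[a]);
            return 0;
        }

    On a real wearable, update() would run once per classification window, with accuracy_fb derived from whatever feedback channels are available.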

    The Contiki-NG open source operating system for next generation IoT devices

    Contiki-NG (Next Generation) is an open source, cross-platform operating system for severely constrained wireless embedded devices. It focuses on dependable (reliable and secure) low-power communications and standardised protocols, such as 6LoWPAN, IPv6, 6TiSCH, RPL, and CoAP. Its primary aims are to (i) facilitate rapid prototyping and evaluation of Internet of Things research ideas, (ii) reduce time-to-market for Internet of Things applications, and (iii) provide an easy-to-use platform for teaching embedded systems-related courses in higher education. Contiki-NG started as a fork of the Contiki OS and retains many of its original features. In this paper, we discuss the motivation behind the creation of Contiki-NG, present the most recent version (v4.7), and highlight the impact of Contiki-NG through specific examples.
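
    To give a flavour of what rapid prototyping on Contiki-NG looks like, here is a minimal process modelled on the stock hello-world example; only the process name and print string are made up.

        /* Minimal Contiki-NG process, modelled on the stock
           hello-world example that ships with the OS. */
        #include "contiki.h"
        #include <stdio.h>

        PROCESS(hello_process, "Hello process");
        AUTOSTART_PROCESSES(&hello_process);

        PROCESS_THREAD(hello_process, ev, data)
        {
          static struct etimer timer;

          PROCESS_BEGIN();

          /* Fire once per second using the event timer API. */
          etimer_set(&timer, CLOCK_SECOND);

          while(1) {
            PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
            printf("Hello from Contiki-NG\n");
            etimer_reset(&timer);
          }

          PROCESS_END();
        }

    The protothread macros (PROCESS_BEGIN/PROCESS_WAIT_EVENT_UNTIL/PROCESS_END) give event-driven code a sequential look without per-process stacks, which is what makes the OS fit on severely constrained devices.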

    Temperature-Resilient Time Synchronization for the Internet of Things

    Networks deployed in real-world conditions have to cope with dynamic, unpredictable environmental temperature changes. These changes affect the clock rate on network nodes, and can cause faster clock de-synchronization compared to situations where devices operate under stable temperature conditions. Wireless network protocols such as Time-Slotted Channel Hopping (TSCH) from the IEEE 802.15.4-2015 standard are affected by this problem, since they require tight clock synchronization among all nodes for the network to remain operational. This paper proposes a method for autonomously compensating temperature-dependent clock rate changes. After a calibration stage, nodes continuously perform temperature measurements to compensate for clock drift at run-time. The method is implemented on low-power IoT nodes and evaluated through experiments in a temperature chamber, in indoor and outdoor environments, and with numerical simulations. The results show that applying the method reduces the maximum synchronization error by more than a factor of 10. In this way, the method makes it possible to reduce the total energy spent on time synchronization, a practically relevant concern for low data rate, low energy budget TSCH networks, especially those exposed to environments with changing temperature.

    This work was performed under the SPHERE IRC funded by the UK Engineering and Physical Sciences Research Council (EPSRC), Grant EP/K031910/1. It was also partly funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 761586 (5G-CORAL), the distributed environment Ecare@Home funded by the Swedish Knowledge Foundation, and by a grant from CPER Nord-Pas-de-Calais/FEDER DATA.
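
    As a hedged illustration of the general idea rather than the paper's exact method, the sketch below models a 32.768 kHz tuning-fork crystal's frequency error as the usual quadratic function of temperature and corrects raw tick counts at run-time; the coefficients and function names are assumptions.

        /* Hypothetical temperature-compensated timekeeping sketch.
           Coefficients and API are illustrative, not from the paper. */
        #include <stdint.h>
        #include <stdio.h>

        /* Drift model: ppm(T) = K * (T - T0)^2, calibrated per device.
           K and T0 below are typical tuning-fork datasheet values. */
        static const float K  = -0.034f; /* ppm per degree C squared */
        static const float T0 = 25.0f;   /* turnover temperature, deg C */

        /* Estimated clock error in parts per million at temperature t_c. */
        static float drift_ppm(float t_c)
        {
            float d = t_c - T0;
            return K * d * d;
        }

        /* Correct a raw tick count using the current temperature reading.
           Negative ppm means the clock runs slow, so the correction
           adds the missing ticks back. */
        static uint32_t compensate_ticks(uint32_t raw_ticks, float t_c)
        {
            int32_t corr = (int32_t)((float)raw_ticks * drift_ppm(t_c) * 1e-6f);
            return (uint32_t)((int32_t)raw_ticks - corr);
        }

        int main(void)
        {
            /* One nominal second of 32.768 kHz ticks at two temperatures. */
            printf("at  25 C: %lu ticks\n", (unsigned long)compensate_ticks(32768, 25.0f));
            printf("at -10 C: %lu ticks\n", (unsigned long)compensate_ticks(32768, -10.0f));
            return 0;
        }

    The calibration stage would fit K and T0 per device; at run-time only a temperature read and a few multiplications are needed, which is why such compensation can save synchronization traffic and energy.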